
    Incremental Optimization Transfer Algorithms: Application to Transmission Tomography

    No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography, there are two known families of convergent OS algorithms: methods that use relaxation parameters (Ahn and Fessler, 2003), and methods based on the incremental expectation maximization (EM) approach (Hsiao et al., 2002). This paper generalizes the incremental EM approach by introducing a general framework that we call “incremental optimization transfer.” Like incremental EM methods, the proposed algorithms accelerate convergence and ensure global convergence (to a stationary point) under mild regularity conditions without requiring inconvenient relaxation parameters. The general optimization transfer framework enables the use of a very broad family of non-EM surrogate functions. In particular, this paper provides the first convergent OS-type algorithm for transmission tomography. The general approach is applicable to both monoenergetic and polyenergetic transmission scans as well as to other image reconstruction problems. We propose a particular incremental optimization transfer method for (nonconcave) penalized-likelihood (PL) transmission image reconstruction by using separable paraboloidal surrogates (SPS). Results show that the new “transmission incremental optimization transfer (TRIOT)” algorithm is faster than nonincremental ordinary SPS and even OS-SPS, yet is convergent.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85800/1/Fessler200.pd
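
    As a rough illustration of the incremental optimization transfer idea, the sketch below keeps one quadratic (paraboloidal) surrogate per data subset, refreshes one subset's surrogate at the current iterate, and then maximizes the sum of all surrogates in closed form. This is a minimal sketch on a toy least-squares problem, not the paper's transmission-tomography model; the system matrix, curvature bounds, and subset count are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.normal(size=(60, 4))             # toy system matrix (stand-in for a projector)
        x_true = np.array([1.0, -2.0, 0.5, 3.0])
        y = A @ x_true + 0.1 * rng.normal(size=60)
        subsets = np.array_split(np.arange(60), 6)   # M = 6 data subsets

        # Separable (diagonal) curvature bound per subset, in the spirit of SPS:
        # D_m = diag(|A_m|^T |A_m| 1) majorizes A_m^T A_m.
        curv = [np.abs(A[i]).T @ (np.abs(A[i]) @ np.ones(4)) for i in subsets]

        def grad(x, idx):                        # gradient of subset idx's quadratic term
            return A[idx].T @ (y[idx] - A[idx] @ x)

        x = np.zeros(4)
        anchors = [x.copy() for _ in subsets]    # surrogate expansion points
        grads = [grad(x, i) for i in subsets]    # gradients at those points

        for it in range(30):
            for m, idx in enumerate(subsets):
                # refresh only subset m's surrogate at the current iterate ...
                anchors[m], grads[m] = x.copy(), grad(x, idx)
                # ... then maximize the sum of all M quadratic surrogates in closed form
                x = sum(c * a + g for c, a, g in zip(curv, anchors, grads)) / sum(curv)

        print(x)   # approaches x_true as the sweeps proceed

    Because each quadratic surrogate is separable, the maximization step reduces to an elementwise division, which is what keeps the per-subset updates cheap.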

    Convergent Incremental Optimization Transfer Algorithms: Application to Tomography

    No convergent ordered subsets (OS) type image reconstruction algorithms for transmission tomography have been proposed to date. In contrast, in emission tomography, there are two known families of convergent OS algorithms: methods that use relaxation parameters, and methods based on the incremental expectation-maximization (EM) approach. This paper generalizes the incremental EM approach by introducing a general framework, "incremental optimization transfer". The proposed algorithms accelerate convergence and ensure global convergence without requiring relaxation parameters. The general optimization transfer framework allows the use of a very broad family of surrogate functions, enabling the development of new algorithms. This paper provides the first convergent OS-type algorithm for (nonconcave) penalized-likelihood (PL) transmission image reconstruction by using separable paraboloidal surrogates (SPS), which yield closed-form maximization steps. We found that fast convergence is achieved effectively by starting with an OS algorithm using a large number of subsets and then switching to the new "transmission incremental optimization transfer (TRIOT)" algorithm. Results show that TRIOT increases the PL objective faster than nonincremental ordinary SPS and even OS-SPS, yet is convergent.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85980/1/Fessler46.pd
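
    The switching strategy this abstract describes can be pictured as two phases, sketched below on the same kind of toy least-squares problem as above: a few fast OS-style passes with scaled subset gradients (quick early progress, no convergence guarantee), followed by a switch to the convergent incremental-surrogate scheme. The schedule lengths and the toy model are illustrative assumptions, not the paper's OS-SPS/TRIOT implementations.

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.normal(size=(80, 3))
        y = A @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.normal(size=80)
        subsets = np.array_split(np.arange(80), 8)
        M = len(subsets)
        curv = [np.abs(A[i]).T @ (np.abs(A[i]) @ np.ones(3)) for i in subsets]
        D = sum(curv)                          # full-data separable curvature

        x = np.zeros(3)

        # Phase 1: OS-style passes -- one scaled subset gradient per update;
        # fast initial progress, but not convergent on its own.
        for it in range(5):
            for idx in subsets:
                x = x + M * (A[idx].T @ (y[idx] - A[idx] @ x)) / D

        # Phase 2: switch to the convergent incremental-surrogate scheme,
        # keeping one quadratic surrogate per subset.
        anchors = [x.copy() for _ in subsets]
        grads = [A[i].T @ (y[i] - A[i] @ a) for i, a in zip(subsets, anchors)]
        for it in range(20):
            for m, idx in enumerate(subsets):
                anchors[m] = x.copy()
                grads[m] = A[idx].T @ (y[idx] - A[idx] @ x)
                x = sum(c * a + g for c, a, g in zip(curv, anchors, grads)) / sum(curv)

        print(x)   # phase 2 inherits phase 1's head start and then converges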

    Asymptotic Characterization of Log-Likelihood Maximization Based Algorithms and Applications

    The asymptotic distribution of estimates that are based on a sub-optimal search for the maximum of the log-likelihood function is considered. In particular, estimation schemes that are based on a two-stage approach, in which an initial estimate is used as the starting point of a subsequent local maximization, are analyzed. We show that asymptotically the local estimates follow a Gaussian mixture distribution, where the mixture components correspond to the modes of the likelihood function. The analysis is relevant for cases where the log-likelihood function is known to have local maxima in addition to the global maximum, and there is no available method that is guaranteed to provide an estimate within the attraction region of the global maximum. Two applications are discussed.
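
    The phenomenon the abstract analyzes can be reproduced numerically: when a rough initial estimate seeds a local search on a multimodal likelihood, repeated trials land in different attraction regions, and the estimates cluster into a mixture of roughly Gaussian components around the modes. The sinusoid-in-noise model and the crude hill-climb below are illustrative assumptions, not the paper's setting.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.arange(50, dtype=float)
        theta_true = 0.30                       # true frequency (rad/sample)

        def loglik(theta, y):                   # Gaussian log-likelihood up to constants
            return -0.5 * np.sum((y - np.cos(theta * t)) ** 2)

        def local_max(theta0, y, step=1e-3):    # crude hill-climb; any local search would do
            theta = theta0
            while True:
                cand = (theta - step, theta, theta + step)
                best = max(cand, key=lambda v: loglik(v, y))
                if best == theta:
                    return theta
                theta = best

        estimates = []
        for trial in range(200):
            y = np.cos(theta_true * t) + 0.5 * rng.normal(size=t.size)
            theta0 = theta_true + 0.1 * rng.normal()   # rough initial estimate
            estimates.append(local_max(theta0, y))
        estimates = np.array(estimates)

        # Fraction of trials landing near the global mode; the rest sit in
        # sidelobe modes, giving the overall estimate distribution its
        # Gaussian-mixture shape.
        print(np.mean(np.abs(estimates - theta_true) < 0.02))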

    Tests for Global Maximum of the Likelihood Function

    Given a relative maximum of the log-likelihood function, how can one assess whether it is the global maximum? This paper investigates a statistical tool that answers this question by posing it as a hypothesis testing problem. A general framework for constructing tests for the global maximum is given. The characteristics of the tests are investigated for two cases: a correctly specified model and model mismatch. A finite-sample approximation to the power is given, which provides a tool for performance prediction and a measure for comparison between tests. The tests are illustrated for two applications: estimating the parameters of a Gaussian mixture model and direction finding using an array of sensors, practical problems that are known to suffer from local maxima.
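
    One classical route to such a test, sketched below under strong simplifying assumptions, exploits the information-matrix identity: under a correctly specified model, the outer-product-of-scores and negative-Hessian estimates of the Fisher information agree at the global maximum, while at a spurious local maximum they typically disagree. The scalar sinusoid-in-noise model and the unnormalized discrepancy statistic are illustrative choices, not the tests constructed in the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(1, 201, dtype=float)
        theta_true, sigma = 0.30, 0.3
        y = np.cos(theta_true * t) + sigma * rng.normal(size=t.size)

        def scores(theta):                      # per-sample scores s_i(theta)
            r = y - np.cos(theta * t)
            return -r * t * np.sin(theta * t) / sigma**2

        def dscores(theta):                     # per-sample derivatives s_i'(theta)
            r = y - np.cos(theta * t)
            return -(t**2 * np.sin(theta * t)**2 + r * t**2 * np.cos(theta * t)) / sigma**2

        def discrepancy(theta):
            # relative gap between the two Fisher-information estimates;
            # small (up to sampling error) only where the identity holds
            i_op = np.mean(scores(theta)**2)
            i_hess = -np.mean(dscores(theta))
            return (i_op - i_hess) / i_hess

        def local_max(theta0, step=1e-4):       # crude hill-climb to the nearest mode
            ll = lambda v: -0.5 * np.sum((y - np.cos(v * t))**2)
            theta = theta0
            while True:
                cand = (theta - step, theta, theta + step)
                best = max(cand, key=ll)
                if best == theta:
                    return theta
                theta = best

        theta_hat = local_max(theta_true)           # global mode (seeded nearby)
        theta_bad = local_max(theta_true + 0.031)   # a sidelobe (spurious) mode
        print(discrepancy(theta_hat))   # small: the identity roughly holds here
        print(discrepancy(theta_bad))   # typically much larger at the spurious mode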

    Distributed maximum likelihood estimation for sensor networks

    The problem of finding the maximum likelihood estimator of a commonly observed model, based on data collected by a sensor network under power and bandwidth constraints, is considered. In particular, a case where the sensors cannot fully share their data is treated. An iterative algorithm that relaxes the requirement of sharing all the data is given. The algorithm is based on a local Fisher scoring method and an iterative information sharing procedure. The case where the sensors share sub-optimal estimates is also analyzed. The asymptotic distribution of the estimates is derived and used to provide a means of discriminating between estimates that are associated with different local maxima of the log-likelihood function. The results are validated by a simulation.
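
    The flavor of such an algorithm can be sketched as follows: at each iteration every sensor evaluates its local score and Fisher information at the common iterate, shares only those low-dimensional summaries, and the network takes a fused Fisher-scoring step. The scalar Gaussian model with known per-sensor noise levels is an illustrative assumption, not the paper's setting.

        import numpy as np

        rng = np.random.default_rng(4)
        theta_true = 2.5
        noise_sd = (0.5, 1.0, 2.0)              # per-sensor noise levels, assumed known
        sensors = [rng.normal(theta_true, s, size=40) for s in noise_sd]

        theta = 0.0                              # common initial iterate
        for it in range(10):
            # each sensor shares two scalars (score, information), not its 40 samples
            score = sum(np.sum(y - theta) / s**2 for y, s in zip(sensors, noise_sd))
            info = sum(len(y) / s**2 for y, s in zip(sensors, noise_sd))
            theta = theta + score / info         # fused Fisher-scoring step

        print(theta)   # for this toy model, one step already gives the centralized
                       # MLE: the precision-weighted mean of all sensor data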

    Sensor network source localization via projection onto convex sets (POCS)

    This paper addresses the problem of locating an acoustic source using a sensor network in a distributed manner, i.e., without transmitting the full data set to a central point for processing. This problem has traditionally been addressed through the nonlinear least squares or maximum likelihood framework. These methods, even though asymptotically optimal under certain conditions, pose a difficult global optimization problem. It is shown that the associated objective function may have multiple local optima and saddle points, and hence any local search method might stagnate at a sub-optimal solution. In this paper, we formulate the problem as a convex feasibility problem and apply a distributed version of the projection onto convex sets (POCS) method. We give a closed-form expression for the projection phase, which usually constitutes the heaviest computational aspect of POCS. Conditions are given under which, when the number of samples increases to infinity or in the absence of measurement noise, the convex feasibility problem has a unique solution at the true source location. In general, the method converges to a limit point or a limit cycle in the neighborhood of the true location. Simulation results show convergence to the global optimum with extremely fast convergence rates compared to previous methods.
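
    The geometry behind the method can be sketched with range-based sets: each sensor's measurement defines a disk that should contain the source, the projection onto a disk has a simple closed form, and cyclic projections drive the iterate toward the intersection of the disks. How the ranges are obtained (e.g., from received energy) and the noise model are simplified assumptions here.

        import numpy as np

        rng = np.random.default_rng(5)
        source = np.array([3.0, 4.0])
        sensors = rng.uniform(0, 10, size=(8, 2))
        ranges = np.linalg.norm(sensors - source, axis=1) * (1 + 0.02 * rng.normal(size=8))

        def project_onto_disk(x, center, radius):
            d = np.linalg.norm(x - center)
            if d <= radius:
                return x                                 # already inside the set
            return center + radius * (x - center) / d    # closed-form projection

        x = np.zeros(2)                                  # arbitrary starting point
        for sweep in range(50):                          # cyclic sweeps over the sensors
            for p, r in zip(sensors, ranges):
                x = project_onto_disk(x, p, r)

        print(x)   # lands near `source` when the noisy disks (nearly) intersect there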

    APOCS: A Rapidly Convergent Source Localization Algorithm for Sensor Networks

    This paper addresses the problem of locating an acoustic source using a sensor network in a distributed manner, i.e., without transmitting the full data set to a central point for processing. This problem has traditionally been addressed through the maximum likelihood framework or nonlinear least squares. These methods, even though asymptotically optimal under certain conditions, pose a difficult global optimization problem. It is shown that the associated objective function may have multiple local optima, and hence local search methods might stagnate at a sub-optimal solution. In this paper, we treat the problem in its convex feasibility formulation. We propose the aggregated projection onto convex sets (APOCS) method, which, in contrast to the original POCS method, converges to a meaningful limit even when the problem is infeasible, without requiring a diminishing step size. Simulation results show convergence to the global optimum with significantly faster convergence rates compared to previous methods.
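
    In the same toy setting as the POCS sketch above, an aggregation-style variant replaces the cyclic sweep with a simultaneous step toward the average of all projections, which settles near a point of small mean squared distance to the sets even when noisy ranges make the disks fail to intersect. This is a minimal sketch of the aggregation idea with a fixed step, not the paper's APOCS algorithm; the uniform weights and step size are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(6)
        source = np.array([3.0, 4.0])
        sensors = rng.uniform(0, 10, size=(8, 2))
        ranges = np.linalg.norm(sensors - source, axis=1) * (1 + 0.05 * rng.normal(size=8))

        def project_onto_disk(x, center, radius):
            d = np.linalg.norm(x - center)
            return x if d <= radius else center + radius * (x - center) / d

        x = np.zeros(2)
        step = 1.0                               # fixed, non-diminishing step size
        for it in range(200):
            # aggregate: move toward the average of the projections onto all sets
            proj = np.array([project_onto_disk(x, p, r) for p, r in zip(sensors, ranges)])
            x = x + step * (proj.mean(axis=0) - x)

        print(x)   # settles near `source` even if the noisy disks do not all intersect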